DisenCite: Graph-Based Disentangled Representation Learning for Context-Specific Citation Generation


Abstract

Citing and describing related literature are crucial to scientific writing. Many existing approaches show encouraging performance in citation recommendation, but are unable to accomplish the more challenging and onerous task of citation text generation. In this paper, we propose a novel disentangled representation based model, DisenCite, to automatically generate citation text by integrating the paper text and the citation graph. A key novelty of our method compared with existing approaches is that it generates context-specific citation text, empowering the generation of different types of citations for the same paper. In particular, we first build and make available a graph enhanced contextual citation dataset (GCite) with 25K citation edges, characterized by the sections that contain them, over 4.8K research papers. Based on this dataset, we encode each paper according to both its textual contexts and the structure information in the heterogeneous paper graph. The resulting representations are then disentangled with mutual information regularization between each paper and its neighbors. Extensive experiments demonstrate the superior performance of our method compared with state-of-the-art approaches. We further conduct ablation and case studies to reassure that the improvement comes from generating context-specific citations by incorporating the citation graph.
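The abstract sketches two components: a heterogeneous paper-graph encoder and a mutual-information regularizer that disentangles section-specific representations of a paper. The snippet below is a minimal, hypothetical illustration of the second idea only, not the authors' implementation: the module names, dimensions, and section types are assumptions, and a simple squared-cosine penalty stands in for the paper's mutual-information regularization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SectionDisentangler(nn.Module):
    """Split a fused paper embedding into section-specific (citation-context) parts."""
    def __init__(self, in_dim=768, part_dim=128,
                 sections=("introduction", "method", "experiment")):
        super().__init__()
        # One projection head per citation-context type (e.g. the section a citation appears in).
        self.heads = nn.ModuleDict({s: nn.Linear(in_dim, part_dim) for s in sections})

    def forward(self, paper_emb):
        # paper_emb: (batch, in_dim) fused textual + graph representation of each paper.
        return {s: F.normalize(head(paper_emb), dim=-1) for s, head in self.heads.items()}

def disentangle_penalty(parts):
    """Discourage different section-specific parts from encoding the same information
    (a simplified stand-in for the mutual-information regularization in the abstract)."""
    keys, loss, pairs = list(parts), 0.0, 0
    for i in range(len(keys)):
        for j in range(i + 1, len(keys)):
            # Per-paper cosine similarity between two section-specific parts.
            cos = (parts[keys[i]] * parts[keys[j]]).sum(dim=-1)
            loss = loss + cos.pow(2).mean()
            pairs += 1
    return loss / max(pairs, 1)

# Toy usage with random vectors standing in for the GCite paper encodings.
model = SectionDisentangler()
paper_emb = torch.randn(32, 768)
parts = model(paper_emb)
penalty = disentangle_penalty(parts)  # added to the citation-generation loss during training
```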



Similar resources

Linear Disentangled Representation Learning for Facial Actions

Limited annotated data available for the recognition of facial expression and action units embarrasses the training of deep networks, which can learn disentangled invariant features. However, a linear model with just several parameters normally is not demanding in terms of training data. In this paper, we propose an elegant linear model to untangle confounding factors in challenging realistic m...


Paper2vec: Citation-Context Based Document Distributed Representation for Scholar Recommendation

Due to the availability of references of research papers and the rich information contained in papers, various citation analysis approaches have been proposed to identify similar documents for scholar recommendation. Despite the success of previous approaches, they are, however, based on co-occurrence of items. Once there are no co-occurrence items available in documents, they will not work ...


Disentangled Person Image Generation

Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between the different image factors, such as the foreground, background and pose information. In this work, we aim at generating such images based on a novel, two-stage reconstruction pipeline that learns a disentangled representation of the aforementioned image factors and generates novel pers...


Detach and Adapt: Learning Cross-Domain Disentangled Deep Representation

The recent progress and development of deep generative models have led to remarkable improvements in research topics in computer vision and machine learning. In this paper, we address the task of cross-domain feature disentanglement. We advance the idea of unsupervised domain adaptation and propose to perform joint feature disentanglement and adaptation. Based on generative adversarial networks...


Graph-based Isometry Invariant Representation Learning

Learning transformation invariant representations of visual data is an important problem in computer vision. Deep convolutional networks have demonstrated remarkable results for image and video classification tasks. However, they have achieved only limited success in the classification of images that undergo geometric transformations. In this work we present a novel Transformation Invariant Gra...



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i10.21397